Stochastic Proximal Gradient Descent for Nuclear Norm Regularization

Authors

  • Lijun Zhang
  • Tianbao Yang
  • Rong Jin
  • Zhi-Hua Zhou
Abstract

In this paper, we utilize stochastic optimization to reduce the space complexity of convex composite optimization with a nuclear norm regularizer, where the variable is a matrix of size m × n. By constructing a low-rank estimate of the gradient, we propose an iterative algorithm based on stochastic proximal gradient descent (SPGD), and take the last iterate of SPGD as the final solution. The main advantage of the proposed algorithm is that its space complexity is O(m + n); in contrast, most previous algorithms have an O(mn) space complexity. Theoretical analysis shows that it achieves O(log T/√T) and O(log T/T) convergence rates for general convex functions and strongly convex functions, respectively.
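
The update behind SPGD is the standard stochastic proximal gradient step, where the proximal operator of the nuclear norm soft-thresholds the singular values. Below is a minimal sketch of that step on a dense matrix; the `grad_estimate` oracle, the step-size constant `c`, and the O(mn) dense storage are assumptions for illustration only, since the paper's contribution is precisely a low-rank gradient estimate that avoids storing the full iterate.

```python
import numpy as np

def prox_nuclear(X, tau):
    """Proximal operator of tau * ||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def spgd(grad_estimate, X0, lam, T, c=1.0):
    """SPGD loop: X_{t+1} = prox_{eta_t * lam * ||.||_*}(X_t - eta_t * G_t)."""
    X = X0.copy()
    for t in range(1, T + 1):
        eta = c / np.sqrt(t)   # O(1/sqrt(t)) schedule for general convex f (assumed)
        G = grad_estimate(X)   # unbiased stochastic gradient of the smooth part
        X = prox_nuclear(X - eta * G, eta * lam)
    return X                   # last iterate, as the paper's final solution
```

Returning the last iterate rather than an average matches the paper's choice, and it is this last iterate that the O(log T/√T) and O(log T/T) rates refer to.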

Similar papers

Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization

We describe novel subgradient methods for a broad class of matrix optimization problems involving nuclear norm regularization. Unlike existing approaches, our method executes very cheap iterations by combining low-rank stochastic subgradients with efficient incremental SVD updates, made possible by highly optimized and parallelizable dense linear algebra operations on small matrices. Our practi...

Composite Objective Mirror Descent

We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly use...
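
Forward-backward splitting is one of the methods COMID unifies: with the Euclidean Bregman divergence, the COMID update reduces to a (sub)gradient step followed by the regularizer's proximal map. A minimal sketch of that special case for an l1 regularizer, with an assumed constant step size `eta`:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def comid_euclidean_l1(subgrad, x0, lam, T, eta=0.1):
    """COMID with B_psi(x, y) = 0.5*||x - y||^2, i.e. forward-backward splitting:
    x_{t+1} = argmin_x eta*<g_t, x> + eta*lam*||x||_1 + 0.5*||x - x_t||^2."""
    x = x0.copy()
    for _ in range(T):
        g = subgrad(x)   # stochastic (sub)gradient of the loss at x
        x = soft_threshold(x - eta * g, eta * lam)
    return x
```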

On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization

Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such problems is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the co...

Stochastic Weighted Function Norm Regularization

Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty in measuring function complexity. There exists no method in the literature for additive regularization based on...

Seismic impedance inversion using l1-norm regularization and gradient descent methods

We consider numerical solution methods for seismic impedance inversion problems in this paper. The inversion process is ill-posed. To tackle the ill-posedness of the problem and take the sparsity of the reflectivity function into consideration, an l1 norm regularization model is established. In computation, a nonmonotone gradient descent method based on Rayleigh quotient for solving the minimiz...
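
For intuition, here is a nonmonotone proximal gradient sketch in the spirit of this blurb: a least-squares data fit plus an l1 penalty on the reflectivity, with a Barzilai-Borwein step size as a stand-in for the Rayleigh-quotient rule. The operator `A`, the BB rule, and all parameters are assumptions, not the authors' exact scheme.

```python
import numpy as np

def soft(v, tau):
    """Proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_inversion(A, d, lam, T, m0=None):
    """Sketch: min_m 0.5*||A m - d||^2 + lam*||m||_1 with a Barzilai-Borwein
    (Rayleigh-quotient-type) step, which makes the iteration nonmonotone."""
    m = np.zeros(A.shape[1]) if m0 is None else m0.copy()
    g = A.T @ (A @ m - d)
    alpha = 1.0                                  # initial step size (assumed)
    for _ in range(T):
        m_new = soft(m - alpha * g, alpha * lam)
        g_new = A.T @ (A @ m_new - d)
        s, y = m_new - m, g_new - g
        denom = float(s @ y)
        alpha = float(s @ s) / denom if denom > 1e-12 else alpha
        m, g = m_new, g_new
    return m
```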

Journal:
  • CoRR

Volume: abs/1511.01664  Issue: -

Pages: -

Publication date: 2015